(Matt 'Ka-Blah' Haas & Troy 'Don't Touch Me' Makous) The Key: Bias can be found at many stages of a poll, whether in the way the question is asked, the presentation of the results, or the use of the final results to convey an inaccurate point.
The “very wording of questions can make major differences in the results” (NCPP) of a poll. This can be seen in an example of how the same healthcare question was asked in two different ways and got two different results:
Survey USA Poll:
In any health care proposal, how important do you feel it is to give people a choice of both a public plan administered by the federal government and a private plan for their health insurance – extremely important, quite important, not that important, or not at all important?
NBC/WSJ:
Would you favor or oppose creating a public health care plan administered by the federal government that would compete directly with private health insurance companies?
Results:
Survey USA: 70% said it was important or extremely important
NBC/WSJ: 46% were in favor and 48% opposed
As we see here, the wording of the question changes the results of the poll. In the Survey USA poll, the question leaned toward a public plan by asking people about having a choice. The NBC poll leaned away from a public plan by noting that the government would compete with the insurance companies. The results matched each poll accordingly.
REASONS FOR BIAS
For one, the sponsor of the poll must be examined. There have been “polls conducted for interest groups that were performed by otherwise reputable pollsters, but were outrageously biased when their client wanted a certain result. It doesn't mean they lied about the results they got, but they constructed the poll in such a way as to maximize certain responses” (Waldman).
There are other reasons for bias as well. This is not as prevalent in mainstream polls like Rasmussen and Gallup, but many polls may have unreliable people asking the questions. The National Council on Public Polls warns, “Be particularly careful of polls conducted by untrained and unsupervised college students. There have been several cases where the results were at least in part reported by the students without conducting any survey at all.”
Bias from Those Being Surveyed
Bias also comes in who answers the polls. One of the things that is "almost never reported in journalistic accounts" (Waldman) is the percentage of people who actually respond to the polls. More importantly, one must look at the trends in the people who do and do not answer the polls. For example, “The hardest person to get to participate [in a poll] is a young African-American man” (Waldman). One can see how this would be an important fact in predicting the outcome of certain events, like the 2008 presidential election of Barack Obama, where “96 percent of black voters supported Obama and constituted 13 percent of the electorate” (Politico).
To take this a step further, the polling medium itself can skew results, as when internet polling fails to reach the public proportionally: “Among the 65-plus age group in the UK, more than 60 percent have never used the internet” (Sexton). An online survey therefore cannot be very representative of the public, because more than half of the elderly in the UK cannot be reached by it at all.
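The arithmetic behind this kind of coverage gap can be sketched as follows. The population shares and internet-usage rates below are hypothetical numbers chosen for illustration (only the 65+ figure is loosely inspired by the cited Sexton statistic); they are not data from any source above.

```python
# Hypothetical coverage-bias sketch: an online poll can only sample
# people who use the internet, so groups with low usage are underrepresented.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # assumed shares
internet_usage = {"18-34": 0.95, "35-64": 0.80, "65+": 0.40}    # assumed usage rates

# Share of the whole population an online panel can even reach:
reachable = sum(population_share[g] * internet_usage[g] for g in population_share)
print(f"Reachable online: {reachable:.1%}")

# Composition of the online sample vs. the true population:
for g in population_share:
    online_share = population_share[g] * internet_usage[g] / reachable
    print(f"{g}: population {population_share[g]:.0%}, online sample {online_share:.0%}")
```

Under these assumed rates, the 65+ group makes up 20% of the population but only about 10% of the reachable online sample, which is exactly the kind of skew the passage describes.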
SOCIAL DESIRABILITY
Paul Waldman explains this issue as whether or not the question is "something to which people know what the socially desirable answer is" (Waldman). This captures the fundamental idea that people are more likely to conform to the majority. Essentially, people are willing to either lie or accept a choice because they want to side with what Waldman calls "public opinion."
Poll results are very often generalized in their portrayal. Like a map with no key, many polls can give the viewer a wrong impression. This can be done by combining answers like “strongly disagree” and “somewhat disagree” into one bar on a graph labeled “disagree.” If 10% strongly disagree while 60% somewhat disagree, such a graph can be misleading. A similar example is seen below.
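The aggregation problem described above can be shown in a few lines. The response breakdown is the 10%/60% split from the example; the remaining "agree" numbers are made up to round out the total.

```python
# Detailed responses (percentages). The disagree numbers come from the
# example in the text; the agree numbers are assumed for illustration.
detailed = {
    "strongly disagree": 10,
    "somewhat disagree": 60,
    "somewhat agree": 20,
    "strongly agree": 10,
}

# A graph that merges both "disagree" answers into one bar reports this:
collapsed = {
    "disagree": detailed["strongly disagree"] + detailed["somewhat disagree"],
    "agree": detailed["somewhat agree"] + detailed["strongly agree"],
}
print(collapsed)  # {'disagree': 70, 'agree': 30}
```

The collapsed chart shows a commanding 70% "disagree" bar while hiding that only 10% of respondents actually feel strongly, which is the misleading impression the text warns about.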
In order for a graph to accurately display poll results, the numbers should be shown in a clear way that includes all the possible answers to the polling question. If the percentages do not add up to 100%, the graph should note the response percentages for each date. While the graph above combines answers (seen as the subcategories 'Israel' and 'Palestine') and intentionally leaves out information, the graph below displays what a graph should look like (at minimum) when displaying statistics acquired through polling.
For this graph there are no graphical changes; the percentages are at the correct positions and every option is included. This is the type of graph that represents its poll result correctly.
While the graph above is displayed in a clear way, it is still not a very commanding use of the information gathered. For a graph to fully document a poll, it should include everything in the graph above plus a few other very important things. The first of these is who conducted the poll; it is important that the person reading the graph knows where the poll came from. The next fact that should be displayed is the sample size. Many graphs omit this in order to hide a small sample, while a graph with a larger sample carries greater authority. When looking at a poll's graph, it is important to remember that the more information displayed about the poll and its making, the better.
The portrayal of results leads to the next stage of bias in polls: people lobbying for a side who use polls misleadingly to their advantage. An example of how polls can be skewed this way was seen in February 2010:
“Sen. Harry Reid cited a poll that said 58 percent would be ‘angry or disappointed’ if health care overhaul doesn’t pass. True, but respondents in the poll were also split 43-43 on whether they supported the legislation that is currently being proposed” (Fact Check).
This statement shows that polls can be warped to fit the needs of the people using them. In this case, Reid cited the part of the poll that helped his case, while the more divided finding was ignored.
This misleading use of polls occurs frequently. However, oversimplification and assumption can backfire in an argument. This may well be why organizations like FactCheck exist: they search for the hidden caveats in these semi-true statements.
This relates to a very common tool used today, which is the inflating of negative actions or ideas into something they are not. The public often sees oversimplification and misleading campaigning in political television advertisements against candidates where one accuses the other of horrific acts. The use of polls, though mostly helpful, can also be used as another method for people to make a point through embellishment. In these cases, the viewer should be wary of the possibility that the candidates are leaving out part of the issue.
Now that you know what polls do, here is what to ask yourself:
First, do a background check:
Who did the poll? (Is this a well known, credible source? Does pre-existing bias exist?)
Who paid for the poll? (This shows who's trying to get the information that is displayed out into the public's eye)
Who? How many? (Who are the people taking this survey? Is there an adequately sized sample?)
How old is the poll? (Is it even still relevant or current?)
Next, look at the poll itself:
Do the numbers add up?/Are all the answers displayed? (Make sure the votes/percentages add up, and that everything is in place)
Is the wording fair? (Are there more choices to favor one side? Are the choices quick, clear, and asked without opinions? etc...)
How is it presented? (If there's a visual, is it displayed in a special way? Does the visual make one thing more attractive? etc....)
What conclusion is the presenter drawing? (Is it a stretch? Does it fit the numbers?)
Finally, look for other polls of the same question:
Are the answers similar? (Without running through all of the above, do multiple other polls show similar results to the original in question?)
If you get through all of these questions, then the poll is usable.
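The checklist above can be sketched as a simple screening function. Every field name and threshold here is a hypothetical choice for illustration (there is no standard "usable poll" API or cutoff); the point is only that each question from the checklist becomes one concrete check.

```python
# Minimal sketch of the poll checklist as code. All keys and thresholds
# are assumptions made for this example, not standards from any source.
def poll_looks_usable(poll):
    results = poll.get("results", {})
    checks = [
        poll.get("sponsor") is not None,           # who did/paid for the poll?
        poll.get("sample_size", 0) >= 500,         # adequately sized sample (assumed cutoff)
        poll.get("age_days", 9999) <= 90,          # is it still current? (assumed cutoff)
        abs(sum(results.values()) - 100) <= 1,     # do the numbers add up?
        not poll.get("leading_wording", True),     # is the wording fair? (unknown = suspect)
    ]
    return all(checks)

example = {
    "sponsor": "Example University",   # hypothetical poll
    "sample_size": 1200,
    "age_days": 14,
    "results": {"favor": 46, "oppose": 48, "unsure": 6},
    "leading_wording": False,
}
print(poll_looks_usable(example))  # True
```

A poll missing its sponsor or with percentages that do not sum to roughly 100 would fail the same function, mirroring the "background check" and "do the numbers add up" questions above.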
In conclusion, polls are a powerful tool for finding out where any group of people stands on a certain question. While this is certainly true, the polls found in the media, such as opinion polls on news shows, are full of bias and should not be taken at face value. Knowing the information above will allow someone to recognize bias, whether it is found in the way the question is asked, the presentation (graph or otherwise) of the results, or the use of the results. Recognizing bias is the key to deciphering information.
Sources Used:
“American Israeli Support In 2004.” The Israel Project. N.p., Jan. 2004. Web. 26 Apr. 2010. [[http://www.theisraelproject.org/atf/cf/%7B84DC5887-741E-4056-8D91-A389164BC94E%7D/1416poll1.jpg]].
“Are Cops More Likely to Pull Over Black or Mexican People?” Polls Boutique. N.p., n.d. Web. 26 Apr. 2010. [[http://www.pollsb.com/polls/p5470-cops_prone_pull_black_mexican_people]].
“Current Pole.” Staffordshire Local Criminal Justice Board. N.p., 2 June 2008. Web. 26 Apr. 2010. [[http://lcjb.cjsonline.gov.uk/polls.asp?area=33]].
Gawiser, Sheldon R., and Evans Witt. “20 Questions A Journalist Should Ask About Poll Results.” NCPP.org. N.p., n.d. Web. 22 Apr. 2010. [[http://www.ncpp.org/?q=node/4]].
Graham, Heather. “Heather Graham Teaches Us About Polls.” Factcheck.org. N.p., n.d. Web. 3 Oct. 2009. [[http://www.factcheck.org/2009/10/heather-graham-teaches-us-about-polls/]].
Jackson, Brooks, et al. “Health Care Summit Squabbles.” Factcheck.org. N.p., 2 Mar. 2010. Web. 26 Apr. 2010. [[http://factcheck.org/2010/02/health-care-summit-squabbles/]].
Kuhn, David Paul. “Exit Polls: How Obama Won.” Politico.com. N.p., 5 Nov. 2008. Web. 22 Apr. 2010. [[http://www.politico.com/news/stories/1108/15297.html]].
Sexton, Renard. “Selection Bias in UK Polling (Part 2): Internet Polling.” FiveThirtyEight.com. N.p., 18 Apr. 2010. Web. 22 Apr. 2010. [[http://www.fivethirtyeight.com/search/label/pollsters]].
Waldman, Paul. “RE: What Makes a Good Poll.” Message to Paul Wright. 5 May 2003. E-mail.